Notes: EFSPI 9th RegStat Workshop - Part B
Chairs: Heidi Mestl (NOMA, NO; SAWP member) and Giulia Zigon (GSK, IT)
Speaker: Brett Hauber (Pfizer, US)
Research Objectives
Elicit Patient Preferences: Understanding what attributes of treatments for alopecia areata (AA) are most important to patients. This involves quantifying how patients value different aspects of treatments including efficacy and side effects.
Estimate Maximum Acceptable Risks (MAR): Determining the level of risk patients are willing to accept from JAK inhibitors, given their potential side effects such as blood clots, serious infections, and cancer, relative to the treatment benefits.
Preference Shares: Estimating the proportion of patients who would prefer a specific treatment dose (ritlecitinib 50 mg once daily) over no treatment or other dosages based on their assessed benefits and risks.
Rank Probabilities: Comparing the benefit-risk profiles of different treatment dosages (ritlecitinib 50 mg once daily vs. placebo and vs. ritlecitinib 30 mg once daily) to determine which might be preferred by patients under varying scenarios of efficacy and risk.
Discrete Choice Experiment (DCE): Utilizing DCE to gather data on patient preferences, where patients are presented with hypothetical treatment scenarios featuring different combinations of benefits and risks.
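The MAR objective above reduces to a simple ratio: the utility gained from a benefit divided by the marginal disutility of one unit of risk gives the largest risk increase patients would trade for that benefit. A minimal sketch in Python, with all preference-weight values hypothetical (not taken from the study):

```python
def max_acceptable_risk(benefit_utility_gain, risk_disutility_per_pct):
    """Maximum acceptable risk (MAR): the largest increase in a risk
    (in percentage points) patients would trade for a given benefit,
    computed as the benefit's utility gain divided by the marginal
    disutility of one percentage point of risk."""
    return benefit_utility_gain / risk_disutility_per_pct

# Hypothetical DCE preference weights (illustrative only):
# moving from 25% to 75% scalp hair regrowth gains 2.0 utils;
# each added percentage point of blood-clot risk costs 0.4 utils.
mar = max_acceptable_risk(2.0, 0.4)
print(mar)  # 5.0 -> up to a 5-point risk increase is acceptable
```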
Statistical Models and Analyses:
Application of Data:
Visualization and Decision Support
The Discrete Choice Experiment (DCE) is a powerful tool in health economics and outcomes research for understanding patient preferences regarding different treatment options.
Statistical Analysis and Utility Estimation
Statistical Modeling and Results Interpretation
Model Estimation: The utility for each attribute level is estimated using a multinomial (conditional) logit model, which accounts for the fact that the dependent variable (the chosen option) is limited to the alternatives presented in each choice task.
Utility and Preference Weights: The results show how changes in the risk levels (e.g., reduction in the probability of blood clots) and benefits (increased probability of hair regrowth) impact patients’ utility. These preference weights are crucial for understanding the trade-offs that patients are willing to make.
Relative Importance: By examining the largest versus smallest preference weights for each attribute, one derives a measure of relative attribute importance. This metric quantifies how much each attribute (and changes within it) matters to patients compared to the others.
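The relative-importance calculation just described, the range of an attribute's level weights divided by the sum of ranges across attributes, can be sketched as follows; the attribute names and weight values below are hypothetical, not the study's estimates:

```python
def relative_importance(weights_by_attribute):
    """Relative attribute importance from estimated preference weights:
    each attribute's importance is its range (largest minus smallest
    level weight) divided by the sum of ranges across all attributes."""
    ranges = {a: max(w) - min(w) for a, w in weights_by_attribute.items()}
    total = sum(ranges.values())
    return {a: r / total for a, r in ranges.items()}

# Hypothetical level weights (illustrative only)
weights = {
    "hair_regrowth": [-1.0, 0.2, 1.0],   # e.g. 25%, 50%, 75% regrowth
    "blood_clot_risk": [0.5, -0.5],      # low vs. high risk
}
print(relative_importance(weights))
# hair_regrowth ~0.67, blood_clot_risk ~0.33
```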
Sample Characteristics: The typical challenges of such studies include having a sample that might not fully represent the general population, often skewing towards higher education levels or certain demographics more likely to participate in research studies. Acknowledging these limitations is crucial for interpreting the results appropriately.
Generalizability: The findings, while robust within the context of the study, need careful consideration when extrapolating to broader populations, especially given the potential educational and demographic biases.
Benefit-risk analysis in pharmaceuticals is a critical process that systematically evaluates the positive and negative effects of a medical product to determine whether its benefits to patients outweigh the associated risks. This analysis is foundational in regulatory decision-making, ensuring that only those products that provide a net positive impact on health are approved for use.
Steps in Benefit-Risk Analysis
Practical Implications
Insights from the CHMP Review
Speaker: Dr Francesco Pignatti (EMA, NL)
Why preference studies are crucial in developing more systematic and quantifiable approaches to value judgments in drug development and regulatory decisions.
Systematic Value Judgments: Preference studies help integrate systematic methods to quantify how patients value the benefits and risks associated with medical treatments. This systematic approach provides a clear, quantifiable framework that aids in making informed regulatory decisions.
Understanding Trade-offs: By quantifying patient preferences, these studies make explicit the trade-offs that patients are willing to make between the benefits of a drug and its potential harms. This is vital in understanding the real-world implications of treatment choices and ensuring that these choices align with patient values.
Benefit-Risk Balance Formulations
Practical Applications
Regulatory Decisions: In regulatory contexts, these studies provide evidence that supports or refutes the approval of new drugs based on a balance that reflects patient-centered outcomes.
Drug Development: During the drug development process, understanding the weights that patients place on different outcomes can guide clinical trial design, such as choosing endpoints that align with patient priorities.
Personalized Medicine: Over time, these methods can contribute to more personalized approaches to treatment, where decisions about drug use are tailored to individual preferences and risk tolerances.
This session addressed the role of patient preferences in benefit-risk decisions within healthcare, distinguishing between two primary models of decision-making: Directive and Informed Choice. Each model has distinct implications for regulatory practice and patient care:
Both models stress the importance of aligning information and communication strategies with patient values:
1. Directive Decision Making (“Regulator knows best”)
This model assumes that regulators have the best understanding of what constitutes acceptable risk levels and appropriate treatments based on scientific and medical evidence.
The evolving landscape of integrating patient preferences into regulatory frameworks is taking shape under the new ICH guideline E22, which aims to formalize the use of patient preference studies in drug development and regulatory decision-making.
Objectives of E22
Expected Developments
Implementation Challenges
Broader Implications for the Pharmaceutical Industry
Speakers: Elina Asikanius (fimea, FI; SAWP member) and Mouna Akacha (Novartis, CH)
Focus on Relevance and Reliability - Relevance: The data must pertain directly to the patient benefits and treatment efficacy to be considered for inclusion in Section 5.1. - Reliability: Data should be trustworthy, robustly analyzed, and free of bias to guide healthcare professionals and patients in treatment decisions effectively.
Statistical Assessor’s Role: - Assessors are tasked with verifying the reliability of information in Section 5.1, focusing primarily on the data’s statistical integrity rather than its clinical relevance, which is typically determined by clinical experts.
Common Challenges in Data Reliability: - Study Design: Assessors begin by evaluating whether the study’s design adequately addresses the research question, ensuring that the study can validly analyze the intended endpoints. - Data Collection and Analysis: Alignment with the study’s objectives is crucial, along with appropriate handling of multiplicity and bias through well-established scientific methods like randomization and blinding.
Evaluation of Results: - Strong, clear treatment effects are more convincing and can mitigate concerns about minor procedural uncertainties. - Imbalances in patient disposition and adverse events should be routinely reported and analyzed to provide a full picture of the treatment’s impact and safety profile.
Impact of Study Design on Data Inclusion - A suboptimal study design can significantly affect whether study results are included in Section 5.1. If the design does not adequately address the research question or if it leads to compromised data integrity, the data may not be included. - Adjustments during the study, such as allowing changes in treatment dosing or protocol deviations, need to be carefully managed to prevent compromising the study’s integrity.
Chairs: Lukas Aguirre Dávila (PEI, DE; SAWP member) and Pierre Mancini (Sanofi, FR)
Speaker: Sarianne Päivike (fimea, FI)
The continued evolution of the ICH E6 guidelines underlines a shift in the clinical research paradigm towards more dynamic and responsive practices. By focusing on adaptability and risk-based approaches, the guidelines aim to ensure that clinical trials can effectively incorporate technological advances and innovative methodologies, while still safeguarding participant safety and ensuring the reliability of trial results.
Speaker: Juha-Pekka Perttola (Roche, CH)
openstatsware is a scientific working group of the American Statistical Association (ASA) Biopharmaceutical section (BIOP) and a European Special Interest Group (SIG) sponsored by Statisticians in the Pharmaceutical Industry (PSI) and the European Federation of Statisticians in the Pharmaceutical Industry (EFSPI).
Goals are to:
Engineer selected R-packages to fill in gaps in the open-source statistical software landscape, and to promote software tools designed by the working group through publications, conference presentations, workshops, and training courses.
Develop good SWE practices for engineering high-quality statistical software and promote their use in the broader Biostatistics community via public training materials.
Communicate and collaborate with other R software initiatives including via the R Consortium.
Brief history of R submissions
“Just because they say it’s impossible doesn’t mean you can’t do it.” — Roger Bannister
Use of Open Source languages in FDA NDAs (by Phil Bowsher)
GitHub: philbowsher/Open-Source-in-New-Drug-Applications-NDAs-FDA
Novo Nordisk’s Journey to an R based FDA Submission (September 12th 2023)
Roche’s End-to-End R Journey to Submission (September 10th 2024)
What other pieces of the puzzle were needed? (Roche/GNE perspective)
A connected network of companies and individuals working to promote collaborative development of curated open source R packages for clinical reporting usage in pharma, in a space where previously we would only ever have worked in silos on our own closed source and often duplicative solutions. Adopting shared solutions in this post-competitive space should ultimately ease regulatory review, resulting in bringing new treatments to patients faster.
Speaker: Paul Schuette (FDA, US, Virtual)
Statistical Software Clarifying Statement
FDA does not require use of any specific software for statistical analyses, and statistical software is not explicitly discussed in Title 21 of the Code of Federal Regulations [e.g., in 21CFR part 11]. However, the software package(s) used for statistical analyses should be fully documented in the submission, including version and build identification.
The information presented highlights various aspects of regulatory requirements and best practices for the use of software in clinical trials, specifically under the guidance of the FDA and relevant regulations.
File types such as .R, .Markdown, .Rmd, .py, and .jl are accepted in specific modules, highlighting the FDA's adaptability to contemporary statistical programming languages and tools.

R's role in regulatory submissions continues to evolve, driven by collaborative efforts within the R community to standardize and validate its use for FDA submissions. This includes developing frameworks and tools that ensure the reliability and regulatory compliance of R packages and applications, supporting their use in critical areas like pharmaceuticals. This collaborative effort, supported by structured governance and active community involvement, helps maintain R's relevance and utility in statistical and regulatory applications.
Working Groups and Initiatives:
Pilot Projects:
Pilot Projects Using R: - Pilot 1: Utilized a standard Kaplan-Meier survival plot. Kaplan-Meier plots are used to estimate survival functions from lifetime data and are fundamental in clinical trial analysis for assessing treatment effects over time. - Pilot 2: Featured an interactive Kaplan-Meier plot developed using a Shiny application. Shiny is an R package that allows for building interactive web apps straight from R. This interactive feature likely enables users to manipulate variables and filters to view different aspects of the data dynamically.
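Pilot 1's deliverable centers on the Kaplan-Meier product-limit estimator, S(t) = prod over event times of (1 - d_i/n_i). A minimal pure-Python sketch of that estimator (the pilots themselves used R packages; this is only to illustrate the calculation):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate: at each distinct event time
    t, multiply the running survival by (1 - d / n), where d is the
    number of events at t and n the number still at risk just before t."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    curve, s, i = [], 1.0, 0
    while i < len(data):
        t, d, removed = data[i][0], 0, 0
        while i < len(data) and data[i][0] == t:
            d += 1 if data[i][1] else 0   # count events at time t
            removed += 1                  # events + censorings leave the risk set
            i += 1
        if d:
            s *= 1 - d / at_risk
            curve.append((t, s))
        at_risk -= removed
    return curve

# Hypothetical follow-up times (1 = event, 0 = censored)
print(kaplan_meier([1, 2, 2, 3], [1, 1, 0, 1]))
# survival steps down to 0.75, then 0.5, then 0.0 (up to float rounding)
```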
Concerns with Exploratory Analysis:
Usage of Python in Statistical and Data-Driven Applications:
The discussion revolves around the use of open-source software for regulatory submissions, specifically focusing on enhancing the presentation and interactivity of data outputs such as tables and figures. Here’s a summary of the key points and perspectives shared during the session:
The discussion focused on questions regarding the documentation and risk management practices associated with using open-source software packages within a consortium-led project. Key points raised:
I think one point that maybe didn't come across clearly is that this is quite an achievement for the overall community. When this community started, maybe ten years ago, many people did not believe this was possible, but some persevered and actually made it possible, which I think says a lot about this community. It also shows that we can do much, much bigger things. And one thing on my mind: now that this is all open source and no longer competitive, unlike the proprietary macros once developed internally, what do you think is the hurdle to making the whole documentation of a filing publicly available?
Shift from Proprietary to Open Source: There’s a noted shift from competitive proprietary software, developed internally, to more open-source solutions. This transition is significant because open-source software fosters collaboration and shared advancements rather than competitive secrecy.
Public Documentation of Filings: The discussion raises a thought-provoking question about making the documentation of regulatory filings publicly available. This transparency would allow external parties to fully understand the methodologies and analyses conducted, rather than just having access to protocols and results.
Construction Pilot Projects: In response to the question about public documentation, it’s mentioned that the ongoing pilot projects are designed to address this issue of openness and transparency. These pilots aim to make both the software and the data sets accessible to the public.
GitHub and Accessibility: The projects utilize GitHub for sharing software and data, enhancing accessibility and enabling external parties to see exactly how analyses are performed.
Evolution of Reporting Practices: The discussion also touches on how these projects are evolving in terms of documentation and reporting. There is an ongoing dialogue on how to best document and share the learnings from these pilots, including both exploratory steps and recommended practices.
Questions Raised:
Answers Provided:
Chairs: Aysun Cetinyurek Yavuz (Dutch Medicine Evaluation Board, NL) and Julie Jones (Novartis, CH)
Speaker: Kit Roes (Chair of MWP EMA, NL)
Question: Considering the community's progress, how was the decision reached behind the guidance on establishing efficacy based on a single large trial? Was this approach, while sometimes seen as inferior, thoroughly considered given its potential limitations?
Details
The speaker is outlining how feedback from statisticians and other stakeholders has shaped the direction of the 2025-2028 workplan for clinical trials and biostatistics. Key focus areas include the use of external controls, the development of guidelines for synthetic covariates, and complex modeling approaches for pharmacology-based studies. While these topics are high-priority, there are other areas, such as bioequivalence and biosimilarity, that will be addressed in the coming years. The speaker emphasizes the need for statisticians to stay engaged in these areas, as their input is critical to developing robust, scientifically sound methodologies for drug development.
Speaker: Greg Levin (FDA, US, Virtual)
The FDA is responsible for protecting and promoting public health in the United States by regulating a wide range of products, including:
CDER (Center for Drug Evaluation and Research) ensures that safe and effective drugs are made available to the public. It is one of the key centers within the FDA that focuses on drug regulation.
The Office of Biostatistics (OB) provides statistical leadership, expertise, and advice to support CDER’s mission. Specifically, it is involved in:
An article mentioned in the Spring 2024 ASA Biopharmaceutical Report provides insights into OB’s approach to developing statistical policy and guidance, including opportunities for external engagement.
Chair: Katrin Kupas (BMS, CH)
Speaker: David McConnell (National Centre for Pharmacoeconomics, IE)
Speaker: Antonio Remiro Azócar (Novo Nordisk, ES)
This talk addressed the necessity of covariate adjustment in indirect treatment comparisons (ITCs), especially in the context of the evolving EU HTA framework. The challenge lies in advancing methodological innovation while ensuring proper implementation and reporting. The distinction between anchored and unanchored comparisons is critical, with covariate adjustment playing a pivotal role in addressing biases and enhancing the robustness of ITCs. By improving the application of fit-for-purpose methods and promoting transparent reporting, ITCs can provide more reliable comparisons, leading to better decision-making in regulatory and HTA processes across Europe.
The speaker strongly supports the need for covariate adjustment in ITCs, particularly under the EU HTA regulation. The regulation introduces complexities, such as variability in comparators and target populations across member states, that can only be adequately addressed through robust covariate-adjusted methods.
Covariate adjustment is essential to: - Relax the unrealistic assumption of unconditional exchangeability. - Reduce bias due to differences in baseline covariates. - Generate more relevant and precise estimates for specific target populations, which are crucial for policy and reimbursement decisions. - Address uncertainty from cross-study differences, thereby providing more reliable estimates.
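One widely used population-adjustment method in this setting is matching-adjusted indirect comparison (MAIC), which reweights the subject-level data of one trial so that covariate means match the published aggregate statistics of the comparator trial. A single-covariate sketch in Python; the ages and target mean are made up for illustration, and real MAIC handles multiple covariates at once:

```python
from math import exp

def maic_weights(x, target_mean, iters=50):
    """Single-covariate MAIC: find a such that weights
    w_i = exp(a * (x_i - target_mean)) make the weighted mean of x
    equal target_mean. Solved by Newton's method on the convex
    objective Q(a) = sum_i exp(a * z_i), z_i = x_i - target_mean."""
    z = [xi - target_mean for xi in x]
    a = 0.0
    for _ in range(iters):
        w = [exp(a * zi) for zi in z]
        grad = sum(zi * wi for zi, wi in zip(z, w))        # dQ/da
        if abs(grad) < 1e-12:
            break
        hess = sum(zi * zi * wi for zi, wi in zip(z, w))   # d2Q/da2
        a -= grad / hess
    return [exp(a * zi) for zi in z]

# Hypothetical IPD ages vs. the published mean age of the comparator trial
ages = [40, 50, 60, 70]
w = maic_weights(ages, target_mean=52.0)
print(round(sum(wi * xi for wi, xi in zip(w, ages)) / sum(w), 6))  # 52.0
```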
Despite methodological advances, there is still a gap in implementation, and standardization of covariate-adjusted ITCs across Europe will be critical. The next steps should focus on refining these methods, ensuring transparent reporting, and aligning the approaches used across different HTA bodies to meet the specific needs of EU member states.
By improving the transparency and robustness of covariate-adjusted methods, the field can better support reimbursement and policy decisions across diverse healthcare settings in Europe, ultimately enhancing the quality and relevance of indirect treatment comparisons.
While covariate adjustment methods continue to evolve, the challenge is to balance methodological innovation with transparency and trustworthiness in HTA decision-making. Methods like MLNMR and doubly robust frameworks provide a path forward by offering improved robustness and flexibility, but they come with challenges related to complexity, subjectivity, and transparency. Data-adaptive estimators bring additional power to this framework, but must be applied with careful consideration to avoid overcomplicating the analysis and introducing new sources of bias. Ultimately, the key to successful implementation of these methods lies in improving reporting practices, conducting rigorous sensitivity analyses, and ensuring that decision-makers have the information they need to trust the results produced by these advanced methodologies.
There are three main criticisms of covariate-adjusted ITCs:
1. Increased Complexity and Researcher Degrees of Freedom: - Covariate adjustment introduces additional complexity, requiring multiple decisions about methods and models. Some of these decisions include: - Whether to use weighting or outcome modeling. - Which covariates to adjust for, and whether to adjust means or higher-order moments. - Deciding on the specification of the outcome model, handling missing data, or imputing covariates. - These decisions, referred to as researcher degrees of freedom, are often seen as subjective and introduce a level of flexibility that, when not properly reported, can lead to distrust in the results.
2. Poor Reporting Standards: - Increased complexity should necessitate clearer and more transparent reporting, but in practice, reporting of covariate adjustments has been lacking. - Issues include: - Lack of transparency on how covariates are selected (i.e., whether they are effect modifiers or prognostic variables). - Covariate selection is often based on availability from published data rather than being rigorously pre-specified or justified. - There is often little detail about the model selection process, the diagnostics of the model, or whether the model fit was checked. - Failing to provide this critical information leaves questions about whether the models are correctly specified and whether the extrapolation of these models is valid.
3. Publication Bias: - A significant issue is publication bias in favor of covariate-adjusted approaches, where researchers may selectively report favorable results for these methods. - A review presented in the talk shows a standardized difference between unadjusted and covariate-adjusted approaches, with a noticeable bias toward the latter. This could be because covariate adjustment does not always require subject-level data, making it easier to perform, while unadjusted methods may favor simpler, familiar approaches that are less complex but more transparent. - This bias further complicates trust in covariate-adjusted methods, particularly when they are applied to datasets that lack complete or high-quality covariate data.
Why Simpler Methods May Be Preferred: - Assessors and HTA bodies may prefer simpler methods like unadjusted meta-analyses or risk of bias tools because they are more familiar, perceived as transparent, and involve fewer subjective decisions. - These methods are often easier to explain and provide clear, downgraded evidence when biases are found, whereas covariate adjustment methods introduce complexities that can be harder to communicate and justify, especially when the model is not fully reported.
Need for Improved Implementation and Reporting: - While covariate-adjusted methods have evolved, implementation and reporting have not kept up. To improve trust in these methods, the focus should be on: - Better reporting standards: Clearly explaining how covariates are chosen, how models are specified, and ensuring that diagnostics and robustness checks are adequately performed and reported. - Balancing complexity: While the methodology must continue to evolve, it’s crucial to also promote clear, understandable reporting practices so that covariate-adjusted methods don’t lose credibility due to their perceived opacity. - By addressing these gaps, covariate adjustment can be integrated more effectively into ITC methodologies, especially within the context of EU HTA and similar regulatory frameworks.
The EU HTA regulation will introduce significantly greater analytical complexity due to the increased number and sophistication of Indirect Treatment Comparisons (ITCs). A major focus will be on developing covariate-adjusted ITCs, particularly to address highly variable PICO requirements across different target populations and subpopulations.
There is an urgent need to: - Promote the development of more bias-robust methods for covariate-adjusted ITCs. - Train statisticians in advanced ITC methods, both in industry and within HTA bodies, to ensure they keep pace with evolving methodologies. - Strengthen and update best-practice guidelines and improve reporting recommendations regularly. - Expand capacity and resources in ITC methodologies within industry and HTA bodies. - Enhance the capabilities of HTA staff and committee members in assessing ITC methodologies.
Prospective ITC planning—incorporating ITC methods at the trial design stage as part of an HTA-specific analysis plan—could help manage the complexity. By pre-specifying approaches early on, statisticians can ensure that their methods are transparent and their choices clear, facilitating reproducibility and reducing researcher degrees of freedom.
Lastly, ongoing engagement between HTA agencies and industry will be essential in ensuring a smooth transition into this more complex regulatory framework, fostering collaboration, and improving the transparency of the PICO selection process.
Chair: Sandro Gsteiger (Roche, CH)
Speaker: Lara Wolfson (MSD, CH)
Regulatory
Common Source of Outcome (Absolute): Regulators ask whether a product is safe and effective in absolute terms. Their goal is to ensure that the product itself meets the basic standards of safety, efficacy, and quality.
Requirements are transparent and minimal: Regulatory processes are designed to be transparent and relatively straightforward, with predefined standards. They acknowledge that clinical trials may have limitations by design, but focus primarily on ensuring the product is fundamentally safe and effective.
HTA (Health Technology Assessment):
Outcome is Contextual (Relative): HTAs take a broader and more relative view, comparing the new product to existing alternatives. They ask how much more effective or safer a product is compared to others, and whether it is suitable for specific healthcare settings.
Opaque and Intensive Requirements: HTA processes tend to be more complex, with intensive evaluations. They dig deeper into aspects like increased benefit, cost-effectiveness, and meaningful added value.
HTA processes may also involve requests for additional analysis or reanalysis of clinical data to understand these outcomes more contextually.
Summary:
Different Systems, Different Approaches: - Data Requirements: Some systems focus on comparative effectiveness (how a drug performs relative to others), while others prioritize cost-effectiveness (whether the added benefit is worth the cost). - Timings: The timing of HTA evaluations and how they align with regulatory approvals varies between countries. - Nature of HTA Recommendations: In some countries, HTA recommendations are binding (e.g., Brazil, France, Germany), meaning they must be followed, while in others they are non-binding (e.g., Australia, Canada, Denmark). - Relationship with Regulatory Processes: There is variation in how HTA recommendations interact with regulatory approvals, especially in the way they influence pricing and reimbursement decisions.
Key characteristics of HTA processes in different countries:
The image explains how Randomized Controlled Trial (RCT) data is utilized in Health Technology Assessment (HTA) dossiers and what analyses are typically performed by HTA statisticians.
This image explains why Indirect Treatment Comparisons (ITCs) are used in Health Technology Assessment (HTA) submissions, detailing the comparative evidence options available when a direct comparison is not possible.
Comparative Evidence Options for HTA Submissions:
Pathways for Indirect Comparison:
1. ITC Feasibility: - Question: Is there a published, comparator RCT available that is suitable for comparison with the new treatment? - Yes: If a suitable comparator RCT exists, conduct an ITC (Indirect Treatment Comparison) based on published RCTs. - No: If no suitable comparator RCT exists, proceed to the next option.
2. ECA Feasibility: - Question: Is there suitable and robust real-world data (RWD) from a similar population that can be used as an external control arm (ECA)? - Yes: If suitable RWD is available, conduct an ECA (External Control Arm) comparison based on real-world data. - No: If neither an RCT nor RWD is feasible for indirect comparison, further methods will be needed to fill the gap.
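When a suitable comparator RCT exists (the first pathway above), the simplest anchored ITC is the Bucher method: the effects of each treatment relative to the shared comparator arm are differenced, and their variances add under independence. A sketch with made-up effect estimates:

```python
from math import sqrt

def bucher_itc(d_ab, se_ab, d_cb, se_cb):
    """Anchored indirect comparison (Bucher method): estimate the
    A-vs-C effect via a common comparator B as d_AC = d_AB - d_CB,
    with Var(d_AC) = Var(d_AB) + Var(d_CB) under independence."""
    d_ac = d_ab - d_cb
    se_ac = sqrt(se_ab ** 2 + se_cb ** 2)
    ci = (d_ac - 1.96 * se_ac, d_ac + 1.96 * se_ac)  # 95% CI
    return d_ac, se_ac, ci

# Hypothetical log hazard ratios vs. a shared placebo arm B
d_ac, se_ac, ci = bucher_itc(d_ab=-0.50, se_ab=0.15, d_cb=-0.20, se_cb=0.20)
print(round(d_ac, 3), round(se_ac, 3))  # -0.3 0.25
```

Note the price of anchoring: the indirect estimate's standard error is larger than either direct one, which is part of why HTA bodies scrutinize ITC evidence so closely.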
In Summary
In this evolving landscape, statisticians face increasing challenges, particularly in how to conduct Indirect Treatment Comparisons (ITCs) that satisfy the varying requirements of different countries. The EUHTA regulation is set to streamline this process, providing a more unified framework across the EU, and potentially reducing the time between marketing approval and reimbursement decisions. By addressing these challenges and embracing the new framework, stakeholders aim to provide better, faster access to healthcare technologies for patients across Europe.
Summary:
The new EUHTA Joint Clinical Assessment (JCA) process creates a centralized framework for clinical data submissions across the EU, but each country still maintains the power to make national-level decisions on pricing, reimbursement, and access. The process revolves around the PICO framework, with policy-driven rather than evidence-driven decisions, and presents challenges for developers, particularly because they aren’t involved in determining PICO criteria.
However, there is some flexibility in methods and approaches within the JCA dossier, allowing developers to make justified decisions about how they present their evidence. The goal of the EUHTA is to streamline access to health technologies, but the complexities of national and EU-level processes mean that developers and statisticians will need to carefully plan their submissions to meet the diverse needs of EU member states.
The upcoming EUHTA process presents significant challenges, particularly around the tight timelines for JCA submissions and the uncertainty surrounding PICO requirements. Drug developers will need to be proactive in preparing for a range of scenarios, especially as indirect treatment comparisons (ITCs) are likely to play a major role in the analysis. The complexity is heightened by the fact that different stakeholders—drug developers, regulators, HTAs, and patients—have different definitions of populations and comparators, making it essential to approach analysis in a flexible yet thorough way.
In the end, developers will need to balance the structured requirements of the EUHTA with the flexibility to adjust to the final PICOs, ensuring that their submission meets the expectations of both regulatory bodies and the patients who will ultimately benefit from these treatments.
Details
Speaker: Wim Goettsch (Utrecht University and SUSTAIN-HTA, NL)